Trustworthy Artificial Intelligence
Executive order on safe, secure, and trustworthy artificial intelligence
President Biden today issued an Executive Order on "Safe, Secure, and Trustworthy Artificial Intelligence". A fact sheet from the White House states that the order "establishes new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more."
Experts call Biden executive order on AI a 'first step,' but some express doubts
President Biden is expected to unveil an executive order (EO) regulating artificial intelligence, a step long called for by some experts. "I applaud the administration for taking the first step," Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), told Fox News Digital. "We should applaud the first step through the EO but quickly need a framework for the detailed steps beyond that truly safeguard our freedoms." Siegel's comments come after The Washington Post reported Wednesday on Biden administration plans for an executive order on AI, which the paper called the "most significant attempt" the government has so far made to regulate a technology that has been advancing at a rapid pace. The move follows through on Biden's pledge earlier this year, when he vowed executive action that would ensure "America leads the way toward responsible AI innovation."
- North America > United States (1.00)
- Asia > China (0.05)
- Media > News (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation
Díaz-Rodríguez, Natalia, Del Ser, Javier, Coeckelbergh, Mark, de Prado, Marcos López, Herrera-Viedma, Enrique, Herrera, Francisco
Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system's entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI requires a wider vision that comprises the trustworthiness of all processes and actors that are part of the system's life cycle, and considers the previous aspects through different lenses. This more holistic vision contemplates four essential axes: the global principles for the ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the aforementioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: What each requirement for trustworthy AI is, Why it is needed, and How each requirement can be implemented in practice. In addition, a practical approach to implementing trustworthy AI systems makes it possible to define the responsibility of AI-based systems before the law, through a given auditing process. A responsible AI system is therefore the notion we introduce in this work, and a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views published lately about the future of AI. Our reflections on this matter conclude that regulation is key to reaching a consensus among these views, and that trustworthy and responsible AI systems will be crucial for the present and future of our society.
DARPA To Host Workshops For Trustworthy Artificial Intelligence - Potomac Officers Club
The Defense Advanced Research Projects Agency plans to conduct two workshops in 2023, aiming to convene academic, commercial and government experts to foster discussions on developing trustworthy artificial intelligence for national security purposes. DARPA's Information Innovation Office will host a virtual workshop from June 13 to 16 and an in-person workshop in Boston, Massachusetts, from July 31 to Aug. 2 as part of its AI Forward initiative. Each event will be limited to 100 attendees. Interested individuals are tasked with submitting an executive summary by Mar. According to DARPA, research efforts need to be directed toward foundational theory, engineering and human-AI teaming to delimit the scope of AI systems, ensure their real-world functionality and make them trustworthy partners for people.
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
Deloitte Launches Artificial Intelligence Initiative - AI Summary
The DIAL program will leverage the university's research in data analytics and artificial intelligence, along with Deloitte's experience in AI-enabled services for clients in the private and public sectors. "The DIAL program enables Smith and Deloitte to continue their critical collaborations at the forefront of cutting-edge research and emerging technology," said Wedad Elmaghraby, dean's professor of operations management, in a statement last week. "This includes partnering with local industry and federal partners to drive innovation for the public good, creatively pushing our students to embrace analytics challenges in new and unexplored areas of importance, and investing in our understanding of ethical, trustworthy artificial intelligence to further its potential promise." "Alongside our longstanding work with the University of Maryland, DIAL will help provide policymakers, industry leaders, researchers and the broader public with a deeper understanding of artificial intelligence," said Darren Schneider, a principal at Deloitte Consulting LLP, in a statement. It will also examine how government agencies can overcome barriers to AI and use this technology to advance diversity, equity and inclusion, as well as administration priorities at the enterprise level.
The importance of trustworthy Artificial Intelligence
Artificial Intelligence (AI) is having an increasing presence in our everyday lives, and this is believed to be only the beginning. For this to continue, however, it must be ensured that AI is trustworthy in all scenarios. To assist in this endeavour, Linköping University (LiU) is co-ordinating TAILOR, an EU project that has developed a research-based roadmap intended to guide research funding bodies and decision-makers towards the trustworthy AI of the future. 'TAILOR' is an abbreviation of Foundations of Trustworthy AI – integrating, learning, optimisation, and reasoning.
The Need for Ethical, Responsible, and Trustworthy Artificial Intelligence for Environmental Sciences
McGovern, Amy, Ebert-Uphoff, Imme, Gagne, David John II, Bostrom, Ann
Given the growing use of Artificial Intelligence (AI) and machine learning (ML) methods across all aspects of environmental sciences, it is imperative that we initiate a discussion about the ethical and responsible use of AI. In fact, much can be learned from other domains where AI was introduced, often with the best of intentions, yet often led to unintended societal consequences, such as hard coding racial bias in the criminal justice system or increasing economic inequality through the financial system. A common misconception is that the environmental sciences are immune to such unintended consequences when AI is being used, as most data come from observations, and AI algorithms are based on mathematical formulas, which are often seen as objective. In this article, we argue the opposite can be the case. Using specific examples, we demonstrate many ways in which the use of AI can introduce similar consequences in the environmental sciences. This article will stimulate discussion and research efforts in this direction. As a community, we should avoid repeating any foreseeable mistakes made in other domains through the introduction of AI. In fact, with proper precautions, AI can be a great tool to help reduce climate and environmental injustice. We primarily focus on weather and climate examples but the conclusions apply broadly across the environmental sciences.
- North America > United States > Colorado (0.04)
- Oceania > Australia (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- (8 more...)
- Energy (1.00)
- Government > Regional Government > North America Government > United States Government (0.68)
- Law > Criminal Law (0.68)
- (2 more...)
Trustworthy Artificial Intelligence and Process Mining: Challenges and Opportunities
Pery, Andrew, Rafiei, Majid, Simon, Michael, van der Aalst, Wil M. P.
The premise of this paper is that compliance with Trustworthy AI governance best practices and regulatory frameworks is an inherently fragmented process spanning across diverse organizational units, external stakeholders, and systems of record, resulting in process uncertainties and in compliance gaps that may expose organizations to reputational and regulatory risks. Moreover, there are complexities associated with meeting the specific dimensions of Trustworthy AI best practices such as data governance, conformance testing, quality assurance of AI model behaviors, transparency, accountability, and confidentiality requirements. These processes involve multiple steps, hand-offs, re-works, and human-in-the-loop oversight. In this paper, we demonstrate that process mining can provide a useful framework for gaining fact-based visibility to AI compliance process execution, surfacing compliance bottlenecks, and providing for an automated approach to analyze, remediate and monitor uncertainty in AI regulatory compliance processes.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Hawaii (0.04)
- North America > Canada > Ontario > National Capital Region > Ottawa (0.04)
- (2 more...)
- Information Technology > Security & Privacy (0.68)
- Law > Statutes (0.49)
- Government > Regional Government (0.46)
PERSEUS - PhD Candidate in Trustworthy Artificial Intelligence
Building trustworthy AI systems is a cornerstone of applying AI technologies in practice, and therefore we need to explore methods, build tools, and incorporate different perspectives when developing novel AI applications. Based on the definition of the High-Level Expert Group of the European Union, trustworthy AI should be lawful, ethical, and robust. While guidelines exist, research on implementation methodologies is still under development, and this PhD position will contribute to developing principles for creating trustworthy AI applications. Together with partners in the NorwAI SFI, the candidate will work on the creation of guidelines for a sustainable and beneficial use of AI, explore privacy-preserving technologies, and create explainable, interpretable, and transparent prototypes to be tested in industrial settings. This PhD project is part of the PERSEUS doctoral programme: a collaboration between NTNU, Norway's largest university, 11 top-level academic partners in 8 European countries, and 8 industrial partners within sectors of high societal relevance.
Promoting the Use of Trustworthy Artificial Intelligence in Government
Artificial intelligence promises to drive the growth of the United States economy and improve the quality of life of all Americans. On December 3, 2020, President Donald J. Trump signed the Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, which establishes guidance for Federal agency adoption of Artificial Intelligence (AI) to more effectively deliver services to the American people and foster public trust in this critical technology. This order recognizes the potential for AI to improve government operations, such as by reducing outdated or duplicative regulations, enhancing the security of Federal information systems, and streamlining application processes. It also directs agencies to ensure that the design, development, acquisition, and use of AI is done in a manner that protects privacy, civil rights, civil liberties, and American values. The Executive Order (EO) underscores the Trump Administration's commitment to accelerating Federal adoption of AI, modernizing government, cultivating public trust in AI, and exemplifying world leadership in the use of trustworthy AI.
- North America > United States (1.00)
- Europe > Ukraine (0.40)